60 research outputs found

    Performance Analysis in Full-Duplex Relaying Systems with Wireless Power Transfer

    Energy harvesting (EH) technology has become increasingly attractive as a solution for providing long-lasting power to energy-constrained wireless cooperative sensor networks, where it can enable information relaying. Unlike harvesting from intermittent and unpredictable natural sources such as solar, wind, and vibration, harvesting from radio-frequency (RF) signals radiated by ambient transmitters has received tremendous attention. An RF signal can convey both information and energy at the same time, which facilitates simultaneous wireless information and power transfer. Moreover, ambient RF is widely available from base stations, Wi-Fi access points, and mobile phones in the current information era. However, several open issues remain in the state of the art. One key challenge is the rapid energy loss during transfer, especially over long distances; another is the design of protocols that optimally coordinate information and power transmission.
    Meanwhile, in-band full-duplex (IBFD) communication, which can receive and forward information at the same time on the same frequency, has gained considerable attention from researchers for its ability to improve system spectral efficiency. Since RF signals superimpose, the antenna of an IBFD system receives signals from both the desired transmitter and the local transmitter. Because the local transmission path is short, the received local signal power is much larger than that of the desired signal, which results in faulty reception of the desired signal. It is therefore of great significance to study self-interference cancellation methods for IBFD systems. In the recent state of the art, three main types of self-interference cancellation have been studied: passive, digital, and analog cancellation.
In this thesis, we study a polarization-enabled digital self-interference cancellation (PDC) scheme in IBFD EH systems, which cancels self-interference through antenna polarization (propagation domain) and digital processing (digital domain). The theme of this thesis is to address two questions: how self-interference can be canceled in an IBFD EH system, and how to optimize the system's key performance metrics. The thesis makes five research contributions in the important area of IBFD relaying systems with wireless power transfer, with applications primarily in the Internet of Things (IoT) and 5G-and-beyond wireless networks. The overarching objective is to construct analytical system models and evaluate system performance (outage probability, throughput, error rate) in various scenarios. In all five contributions, system models and analytical expressions for the performance metrics are derived, followed by computer simulations for performance analysis.
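The coordination problem between power and information transfer mentioned above can be made concrete with a minimal sketch of the classic time-switching relaying trade-off. This is an illustration only, not the thesis's actual protocol: the conversion efficiency `eta`, the time-split `alpha`, and the assumption that the remaining block time is shared equally between the two hops are all invented for the example.

```python
def harvested_energy(p_source, channel_gain, alpha, block_time, eta=0.7):
    """Energy harvested during the first alpha fraction of a block of
    length block_time (time-switching receiver, conversion efficiency eta)."""
    return eta * p_source * channel_gain * alpha * block_time

def relay_tx_power(p_source, channel_gain, alpha, eta=0.7):
    """Relay transmit power when the harvested energy is spent over the
    (1 - alpha) / 2 fraction of the block used for relay-to-destination
    transmission (illustrative amplify-and-forward time split)."""
    return 2.0 * eta * p_source * channel_gain * alpha / (1.0 - alpha)

# Larger alpha harvests more energy but leaves less time to relay information:
low = relay_tx_power(p_source=1.0, channel_gain=0.1, alpha=0.1)
high = relay_tx_power(p_source=1.0, channel_gain=0.1, alpha=0.5)
```

Sweeping `alpha` between 0 and 1 exposes exactly the coordination question the thesis raises: a longer harvesting phase raises the relay's power budget but shrinks the time left to forward information.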

    Learning to Act Properly: Predicting and Explaining Affordances from Images

    We address the problem of affordance reasoning in diverse scenes that appear in the real world. Affordances relate the agent's actions to their effects when taken on the surrounding objects. In our work, we take the egocentric view of the scene and aim to reason about action-object affordances that respect both the physical world and the social norms imposed by society. We also aim to teach artificial agents why some actions should not be taken in certain situations, and what would likely happen if these actions were taken. We collect a new dataset that builds upon ADE20k, referred to as ADE-Affordance, which contains annotations enabling such rich visual reasoning. We propose a model that exploits Graph Neural Networks to propagate contextual information from the scene in order to perform detailed affordance reasoning about each object. Our model is showcased through various ablation studies, pointing to successes and challenges in this complex task.
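The contextual propagation such a model performs can be caricatured with a toy mean-aggregation message-passing step over a scene graph. The feature vectors, graph layout, and update rule below are illustrative stand-ins, not the paper's architecture:

```python
def propagate(node_feats, edges, steps=2):
    """Toy message passing: at each step, every object node averages its own
    feature vector with those of its in-neighbours in the scene graph.

    node_feats: dict mapping node name -> list[float]
    edges: list of (src, dst) pairs along which messages flow
    """
    for _ in range(steps):
        new_feats = {}
        for node, feat in node_feats.items():
            neigh = [node_feats[s] for s, d in edges if d == node]
            msgs = [feat] + neigh
            # Element-wise mean over the node's own feature and its messages.
            new_feats[node] = [sum(vals) / len(msgs) for vals in zip(*msgs)]
        node_feats = new_feats
    return node_feats
```

After a couple of steps, information about one object (say, a "wet floor" flag on the floor node) has spread to neighbouring objects, which is the intuition behind using graph propagation for per-object affordance reasoning.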

    Is China’s green growth possible? The roles of green trade and green energy

    In tandem with global initiatives to ‘go green’, China is undertaking a series of steps to achieve green economic growth. To investigate the dynamic nexus of green growth, green trade, and green energy (3G) in China, this study develops an index to assess the level of provincial green growth using five types of indicators: economic growth, environmental pollution loss, carbon emissions loss, natural resource loss, and environmental and natural resource benefits. The paper then uses the SYS-GMM method to explore the influences of green trade and green energy on green growth, using data compiled from 30 provinces in China over the period 2007–2016. Furthermore, we check the potential heterogeneity, asymmetry, and internal mediating mechanisms of the 3G nexus. The main findings are as follows: (1) green trade and green energy can accelerate China’s green growth; (2) enhancing medium- and high-technology green trade can contribute to improving local green growth; (3) this impact is heterogeneous across regions with different trade levels, and asymmetric at various quantiles for the full panel; (4) the positive investment effect, labour effect, and technical effect are effective mediators of the nexus between green trade and green growth.

    Ego-Body Pose Estimation via Ego-Head Pose Estimation

    Estimating 3D human motion from an egocentric video sequence is critical to human behavior understanding and applications in VR/AR. However, naively learning a mapping between egocentric videos and human motions is challenging, because the user's body is often unobserved by the front-facing camera placed on the head of the user. In addition, collecting large-scale, high-quality datasets with paired egocentric videos and 3D human motions requires accurate motion capture devices, which often limit the variety of scenes in the videos to lab-like environments. To eliminate the need for paired egocentric video and human motions, we propose a new method, Ego-Body Pose Estimation via Ego-Head Pose Estimation (EgoEgo), that decomposes the problem into two stages, connected by the head motion as an intermediate representation. EgoEgo first integrates SLAM and a learning approach to estimate accurate head motion. Then, taking the estimated head pose as input, it leverages conditional diffusion to generate multiple plausible full-body motions. This disentanglement of head and body pose eliminates the need for training datasets with paired egocentric videos and 3D human motion, enabling us to leverage large-scale egocentric video datasets and motion capture datasets separately. Moreover, for systematic benchmarking, we develop a synthetic dataset, AMASS-Replica-Ego-Syn (ARES), with paired egocentric videos and human motion. On both ARES and real data, our EgoEgo model performs significantly better than the state-of-the-art.
    Comment: project website: https://lijiaman.github.io/projects/egoego
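The second stage (head motion in, plausible body motion out) can be sketched with a toy conditional reverse-diffusion loop. Everything here is invented for illustration: `mock_denoiser`, the step count, and the conditioning rule bear no relation to the paper's trained network; they only show the shape of "start from noise, repeatedly denoise while looking at the condition".

```python
import random

def reverse_diffusion(condition, denoise_step, timesteps=10, dim=4):
    """Toy conditional sampler: start from Gaussian noise and repeatedly apply
    a denoising function that sees the conditioning signal (the head pose)."""
    x = [random.gauss(0.0, 1.0) for _ in range(dim)]
    for t in reversed(range(timesteps)):
        x = denoise_step(x, condition, t)
    return x

def mock_denoiser(x, cond, t):
    """Stand-in for the learned network: pull the sample halfway toward the
    conditioning vector at each step, mimicking how the model steers body
    motion to be consistent with the head trajectory."""
    return [xi + 0.5 * (c - xi) for xi, c in zip(x, cond)]

random.seed(0)  # deterministic toy run
sample = reverse_diffusion([1.0, 1.0, 1.0, 1.0], mock_denoiser)
```

Because the denoiser sees only the head trajectory, the two stages can be trained from disjoint data sources, which is the point of the decomposition.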

    Object Motion Guided Human Motion Synthesis

    Modeling human behaviors in contextual environments has a wide range of applications in character animation, embodied AI, VR/AR, and robotics. In real-world scenarios, humans frequently interact with the environment and manipulate various objects to complete daily tasks. In this work, we study the problem of full-body human motion synthesis for the manipulation of large-sized objects. We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework that can generate full-body manipulation behaviors from only the object motion. Since naively applying diffusion models fails to precisely enforce contact constraints between the hands and the object, OMOMO learns two separate denoising processes to first predict hand positions from object motion and subsequently synthesize full-body poses based on the predicted hand positions. By employing the hand positions as an intermediate representation between the two denoising processes, we can explicitly enforce contact constraints, resulting in more physically plausible manipulation motions. With the learned model, we develop a novel system that captures full-body human manipulation motions by simply attaching a smartphone to the object being manipulated. Through extensive experiments, we demonstrate the effectiveness of our proposed pipeline and its ability to generalize to unseen objects. Additionally, as high-quality human-object interaction datasets are scarce, we collect a large-scale dataset consisting of 3D object geometry, object motion, and human motion. Our dataset contains human-object interaction motion for 15 objects, with a total duration of approximately 10 hours.
    Comment: SIGGRAPH Asia 202
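Why an explicit intermediate representation helps can be shown with a toy contact-enforcement step: once stage one has committed to hand positions, stage two's output can be pinned to them. The joint indices and pose layout below are invented for illustration, not OMOMO's actual skeleton:

```python
def enforce_contact(body_pose, hand_targets, hand_idx=(20, 21)):
    """Overwrite the two wrist joints of a predicted body pose with the hand
    positions produced by the first denoising stage.

    body_pose: list of per-joint [x, y, z] positions (22 joints, illustrative)
    hand_targets: the two stage-one hand positions to enforce as contacts
    """
    pose = [list(joint) for joint in body_pose]  # copy, leave input untouched
    for idx, target in zip(hand_idx, hand_targets):
        pose[idx] = list(target)
    return pose
```

In a single end-to-end diffusion model there is no such handle to grab: contact is only encouraged implicitly by the loss, which is the failure mode the abstract describes.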

    VirtualHome: Simulating Household Activities via Programs

    In this paper, we are interested in modeling complex activities that occur in a typical household. We propose to use programs, i.e., sequences of atomic actions and interactions, as a high level representation of complex tasks. Programs are interesting because they provide a non-ambiguous representation of a task, and allow agents to execute them. However, nowadays, there is no database providing this type of information. Towards this goal, we first crowd-source programs for a variety of activities that happen in people's homes, via a game-like interface used for teaching kids how to code. Using the collected dataset, we show how we can learn to extract programs directly from natural language descriptions or from videos. We then implement the most common atomic (inter)actions in the Unity3D game engine, and use our programs to "drive" an artificial agent to execute tasks in a simulated household environment. Our VirtualHome simulator allows us to create a large activity video dataset with rich ground-truth, enabling training and testing of video understanding models. We further showcase examples of our agent performing tasks in our VirtualHome based on language descriptions.
    Comment: CVPR 2018 (Oral)
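The program representation is easy to make concrete: a sequence of atomic (inter)actions, each an action name plus object arguments, executed by an interpreter that mutates world state. The action names and state layout below are illustrative, not VirtualHome's actual script format:

```python
def execute(program, state):
    """Tiny interpreter stand-in for the simulator: tracks what the agent
    holds and which appliances are switched on (handlers are illustrative)."""
    for action, *args in program:
        if action == "grab":
            state["holding"].add(args[0])
        elif action == "put":  # put args[0] on args[1]
            state["holding"].discard(args[0])
            state.setdefault("on_" + args[1], []).append(args[0])
        elif action == "switch_on":
            state["powered"].add(args[0])
    return state

state = execute(
    [("grab", "mug"), ("put", "mug", "table"), ("switch_on", "tv")],
    {"holding": set(), "powered": set()},
)
```

The non-ambiguity claim in the abstract is visible here: unlike a natural-language description, the same program always produces the same final state.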